134 research outputs found

    Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks

    In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply reimplementing 1980s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing, one ripe with philosophical and psychological significance because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
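The mechanism this abstract describes, a linear filtering step followed by non-linear rectification and pooling, lends itself to a toy illustration. The sketch below is my own minimal example, not code from the paper: a one-dimensional convolution, ReLU, and max-pooling stage maps two slightly shifted exemplars of the same edge pattern onto an identical downstream representation, showing in miniature how each stage becomes more tolerant to small positional "nuisance variation" in the input.

```python
import numpy as np

def relu(x):
    # Non-linear step: rectification zeroes out negative responses.
    return np.maximum(x, 0.0)

def conv1d(signal, kernel):
    # Linear step: slide the kernel across the signal (valid convolution).
    k = len(kernel)
    n = len(signal) - k + 1
    return np.array([np.dot(signal[i:i + k], kernel) for i in range(n)])

def layer(signal, kernel, pool=2):
    # One DCNN-style stage: linear filtering, non-linear rectification,
    # then max-pooling, which discards small positional differences.
    a = relu(conv1d(signal, kernel))
    return np.array([a[i:i + pool].max()
                     for i in range(0, len(a) - pool + 1, pool)])

kernel = np.array([1.0, -1.0])  # a simple edge detector

x = np.array([0, 0, 0, 1, 1, 0, 0, 0], dtype=float)  # an "edge" exemplar
x_shift = np.roll(x, 1)                              # same exemplar, shifted

print(layer(x, kernel))        # [0. 0. 1.]
print(layer(x_shift, kernel))  # [0. 0. 1.] -- identical despite the shift
```

The two inputs differ at the raw sensory level, yet the pooled representation is the same: the linear filter detects the edge wherever it occurs, and pooling absorbs the positional difference. Stacking such stages is what the paper's "transformational abstraction" generalizes.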

    Understanding Associative and Cognitive Explanations in Comparative Psychology


    The Comparative Psychology of Artificial Intelligences

    The last five years have seen a series of remarkable achievements in Artificial Intelligence (AI) research. For example, systems based on Deep Neural Networks (DNNs) can now classify natural images as well as or better than humans, defeat human grandmasters in strategy games as complex as chess, Go, or Starcraft II, and navigate autonomous vehicles across thousands of miles of mixed terrain. I here examine three ways in which DNNs are alleged to fall short of human intelligence: that their training is too data-hungry, that they are vulnerable to adversarial examples, and that their processing is not interpretable. I argue that these criticisms are subject to comparative bias, which must be overcome for comparisons of DNNs and humans to be meaningful. I suggest that AI would benefit here by learning from more mature methodological debates in comparative psychology concerning how to conduct fair comparisons between different kinds of intelligences.

    The Semantic Problem(s) with Research on Animal Mind‐Reading

    Philosophers and cognitive scientists have worried that research on animal mind-reading faces a ‘logical problem’: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence (e.g. line-of-gaze) for those mental states. The most impressive attempt to confront this problem has been mounted recently by Robert Lurz. However, Lurz's approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not agree on criteria for representation. As such, future debate should either abandon the representational idiom or confront underlying semantic disagreement.

    Transitional Gradation in the Mind: Rethinking Psychological Kindhood

    I here critique the application of the traditional, similarity-based account of natural kinds to debates in psychology. A challenge to such accounts of kindhood—familiar from the study of biological species—is a metaphysical phenomenon that I call ‘transitional gradation’: the systematic progression of slightly modified transitional forms between related candidate kinds. Where such gradation proliferates, it renders the selection of similarity criteria for kinds arbitrary. Reflection on general features of learning—especially on the gradual revision of concepts throughout the acquisition of expertise—shows that even the strongest candidates for similarity-based kinds in psychology exhibit systematic transitional gradation. As a result, philosophers of psychology should abandon discussion of kindhood, or explore non-similarity-based accounts.

    From Deep Learning to Rational Machines -- What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence -- Sample Chapter 1 -- "Moderate Empiricism and Machine Learning"

    This book provides a framework for thinking about foundational philosophical questions surrounding the use of deep artificial neural networks (“deep learning”) to achieve artificial intelligence. Specifically, it links recent breakthroughs in deep learning to classical empiricist philosophy of mind. In recent assessments of deep learning’s current capabilities and future potential, prominent scientists have cited historical figures from the perennial philosophical debate between nativism and empiricism, which primarily concerns the origins of abstract knowledge. These empiricists were generally faculty psychologists; that is, they argued that the extraction of abstract knowledge from perceptual experience involves the active engagement of general psychological faculties—such as perception, memory, imagination, attention, and empathy. This book explains how recent headline-grabbing deep learning achievements were enabled by adding functionality to these networks that models forms of processing attributed to these faculties by philosophers such as Aristotle, Ibn Sina (Avicenna), John Locke, David Hume, William James, and Sophie de Grouchy. It illustrates the utility of this interdisciplinary connection by showing how it can provide benefits to both philosophy and computer science: computer scientists can continue to mine the history of philosophy for ideas and aspirational targets to hit on the way to building more robustly rational artificial agents, and philosophers can see how some of the historical empiricists’ most ambitious speculations can now be realized in specific computational systems.

    A property cluster theory of cognition

    Our prominent definitions of cognition are too vague and lack empirical grounding. They have not kept up with recent developments, and cannot bear the weight placed on them across many different debates. I here articulate and defend a more adequate theory. On this theory, behaviors under the control of cognition tend to display a cluster of characteristic properties, a cluster which tends to be absent from behaviors produced by non-cognitive processes. This cluster is reverse-engineered from the empirical tests that comparative psychologists use to determine whether a behavior was generated by a cognitive or a non-cognitive process. Cognition should be understood as the natural kind of psychological process that non-accidentally exhibits the properties assessed by these tests (as well as others we have not yet discovered). Finally, I review two plausible neural accounts of cognition's underlying mechanisms—one based in localization of function to particular brain regions and another based in the more recent distributed networks approach to neuroscience—which would explain why these properties non-accidentally cluster. While this notion of cognition may be useful for a number of debates, I here focus on its application to a recent crisis over the distinction between cognition and association in comparative psychology.
